
Conversation

@localai-bot

Bump of ggerganov/llama.cpp version

@localai-bot localai-bot force-pushed the update/CPPLLAMA_VERSION branch from 1740fff to b1a1ec7 Compare January 6, 2024 20:05
netlify bot commented Jan 6, 2024

Deploy Preview for localai ready!

| Name | Link |
|---|---|
| 🔨 Latest commit | e50f843 |
| 🔍 Latest deploy log | https://app.netlify.com/sites/localai/deploys/659b04161787be0008ee566f |
| 😎 Deploy Preview | https://deploy-preview-1558--localai.netlify.app |

@localai-bot localai-bot force-pushed the update/CPPLLAMA_VERSION branch from b1a1ec7 to e50f843 Compare January 7, 2024 20:05
@mudler mudler merged commit 574fa67 into mudler:master Jan 7, 2024
truecharts-admin referenced this pull request in trueforge-org/truecharts Jan 8, 2024
….0 by renovate (#17044)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) |
minor | `v2.4.1-cublas-cuda11-ffmpeg-core` ->
`v2.5.0-cublas-cuda11-ffmpeg-core` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

---

### Release Notes

<details>
<summary>mudler/LocalAI (docker.io/localai/localai)</summary>

### [`v2.5.0`](https://togithub.com/mudler/LocalAI/releases/tag/v2.5.0)

[Compare
Source](https://togithub.com/mudler/LocalAI/compare/v2.4.1...v2.5.0)

<!-- Release notes generated using configuration in .github/release.yml
at master -->

##### What's Changed

This release adds more embedded models and shrinks image sizes.

You can now run `phi-2` (see
[here](https://localai.io/basics/getting_started/#running-popular-models-one-click)
for the full list) locally by starting LocalAI with:

```bash
docker run -ti -p 8080:8080 localai/localai:v2.5.0-ffmpeg-core phi-2
```
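Once the container is up, LocalAI serves an OpenAI-compatible API on port 8080. As a minimal sketch (the endpoint path and payload shape follow the OpenAI chat-completions convention; the helper name here is hypothetical), this is the kind of JSON body you would POST to `http://localhost:8080/v1/chat/completions`:

```python
import json

def chat_completion_request(model: str, prompt: str, temperature: float = 0.7) -> str:
    """Build the JSON body for an OpenAI-style chat completion request."""
    payload = {
        "model": model,  # e.g. "phi-2", the short-hand loaded at startup
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }
    return json.dumps(payload)

# Send this body with any HTTP client, e.g.:
#   curl http://localhost:8080/v1/chat/completions \
#     -H "Content-Type: application/json" -d "$BODY"
body = chat_completion_request("phi-2", "Explain quantization in one sentence.")
print(body)
```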

LocalAI now accepts as arguments a list of model short-hands and/or URLs
pointing to valid YAML files. A popular way to host those files is
GitHub Gists.

For instance, you can run `llava` by starting `local-ai` with:

```bash
docker run -ti -p 8080:8080 localai/localai:v2.5.0-ffmpeg-core https://raw.githubusercontent.com/mudler/LocalAI/master/embedded/models/llava.yaml
```
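Such URLs point to plain model definition files. As a rough, hedged illustration of what one of these YAML files can contain (field names and values here are a sketch, not taken from the release; consult the `embedded/models/` directory in the LocalAI repository for authoritative examples):

```yaml
# Hypothetical model definition hosted in a gist or at a raw URL.
name: my-model                      # short-hand name exposed by the API
backend: llama-cpp                  # inference backend to load (assumed)
parameters:
  model: my-model.Q4_K_M.gguf      # weights file to fetch and use
context_size: 2048
```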

##### Exciting New Features 🎉

- feat: more embedded models, coqui fixes, add model usage and
description by [@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/1556](https://togithub.com/mudler/LocalAI/pull/1556)

##### 👒 Dependencies

- deps(conda): use transformers-env with vllm,exllama(2) by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/1554](https://togithub.com/mudler/LocalAI/pull/1554)
- deps(conda): use transformers environment with autogptq by
[@&#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/1555](https://togithub.com/mudler/LocalAI/pull/1555)
- ⬆️ Update ggerganov/llama.cpp by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/1558](https://togithub.com/mudler/LocalAI/pull/1558)

##### Other Changes

- ⬆️ Update docs version mudler/LocalAI by
[@&#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/1557](https://togithub.com/mudler/LocalAI/pull/1557)

**Full Changelog**:
mudler/LocalAI@v2.4.1...v2.5.0

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 10pm on monday" in timezone
Europe/Amsterdam, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Enabled.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR has been generated by [Renovate
Bot](https://togithub.com/renovatebot/renovate).

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiIzNy4xMjcuMCIsInVwZGF0ZWRJblZlciI6IjM3LjEyNy4wIiwidGFyZ2V0QnJhbmNoIjoibWFzdGVyIn0=-->
GabrielBarzen referenced this pull request in GabrielBarzen/charts Feb 2, 2024
….0 by renovate (trueforge-org#17044)

This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [docker.io/localai/localai](https://togithub.com/mudler/LocalAI) |
minor | `v2.4.1-cublas-cuda11-ffmpeg-core` ->
`v2.5.0-cublas-cuda11-ffmpeg-core` |

---

> [!WARNING]
> Some dependencies could not be looked up. Check the Dependency
Dashboard for more information.

---

### Release Notes

<details>
<summary>mudler/LocalAI (docker.io/localai/localai)</summary>

### [`v2.5.0`](https://togithub.com/mudler/LocalAI/releases/tag/v2.5.0)

[Compare
Source](https://togithub.com/mudler/LocalAI/compare/v2.4.1...v2.5.0)

<!-- Release notes generated using configuration in .github/release.yml
at master -->

##### What's Changed

This release adds more embedded models, and shrink image sizes.

You can run now `phi-2` ( see
[here](https://localai.io/basics/getting_started/#running-popular-models-one-click)
for the full list ) locally by starting localai with:

    docker run -ti -p 8080:8080 localai/localai:v2.5.0-ffmpeg-core phi-2

LocalAI accepts now as argument a list of short-hands models and/or URLs
pointing to valid yaml file. A popular way to host those files are
Github gists.

For instance, you can run `llava`, by starting `local-ai` with:

```bash
docker run -ti -p 8080:8080 localai/localai:v2.5.0-ffmpeg-core https://raw.githubusercontent.com/mudler/LocalAI/master/embedded/models/llava.yaml
```

##### Exciting New Features 🎉

- feat: more embedded models, coqui fixes, add model usage and
description by [@&trueforge-org#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/1556](https://togithub.com/mudler/LocalAI/pull/1556)

##### 👒 Dependencies

- deps(conda): use transformers-env with vllm,exllama(2) by
[@&trueforge-org#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/1554](https://togithub.com/mudler/LocalAI/pull/1554)
- deps(conda): use transformers environment with autogptq by
[@&trueforge-org#8203;mudler](https://togithub.com/mudler) in
[https://github.com/mudler/LocalAI/pull/1555](https://togithub.com/mudler/LocalAI/pull/1555)
- ⬆️ Update ggerganov/llama.cpp by
[@&trueforge-org#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/1558](https://togithub.com/mudler/LocalAI/pull/1558)

##### Other Changes

- ⬆️ Update docs version mudler/LocalAI by
[@&trueforge-org#8203;localai-bot](https://togithub.com/localai-bot) in
[https://github.com/mudler/LocalAI/pull/1557](https://togithub.com/mudler/LocalAI/pull/1557)

**Full Changelog**:
mudler/LocalAI@v2.4.1...v2.5.0

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "before 10pm on monday" in timezone
Europe/Amsterdam, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Enabled.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

This PR has been generated by [Renovate
Bot](https://togithub.com/renovatebot/renovate).

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiIzNy4xMjcuMCIsInVwZGF0ZWRJblZlciI6IjM3LjEyNy4wIiwidGFyZ2V0QnJhbmNoIjoibWFzdGVyIn0=-->